We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, which achieve the state-of-the-art privacy-versus-utility tradeoffs on many standard NLP tasks. We propose a meta-framework for this problem, inspired by the recent success of highly parameter-efficient methods for fine-tuning. Our experiments show that differentially private adaptations of these approaches outperform previous private algorithms in three important dimensions: utility, privacy, and the computational and memory cost of private training. On many commonly studied datasets, the utility of private models approaches that of non-private models. For example, on the MNLI dataset we achieve an accuracy of $87.8\%$ using RoBERTa-Large and $83.5\%$ using RoBERTa-Base with a privacy budget of $\epsilon = 6.7$. In comparison, absent privacy constraints, RoBERTa-Large achieves an accuracy of $90.2\%$. Our findings are similar for natural language generation tasks. On the DART dataset, privately fine-tuned GPT-2-Small, GPT-2-Medium, GPT-2-Large, and GPT-2-XL achieve BLEU scores of 38.5, 42.0, 43.1, and 43.8 respectively ($\epsilon = 6.8$, $\delta = 10^{-5}$), whereas the non-private baseline is $48.1$. All our experiments suggest that larger models are better suited for private fine-tuning: while they are well known to achieve superior accuracy non-privately, we find that they also better maintain their accuracy when privacy is introduced.
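As an illustration of the training primitive behind such results, here is a minimal, hypothetical DP-SGD sketch in plain PyTorch that privately updates only a small set of adapter parameters while a large pre-trained encoder stays frozen. All names (`adapter`, `clip_norm`, `noise_multiplier`) are our own assumptions, not the authors' implementation, and privacy accounting (computing $\epsilon$) is omitted.

```python
import torch

def dp_sgd_step(adapter, encoder, batch, loss_fn, lr=1e-3,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step: per-example gradient clipping plus Gaussian noise.

    Only the (small) adapter is trained; the encoder is frozen, which is
    the parameter-efficiency idea the abstract alludes to.
    """
    params = [p for p in adapter.parameters() if p.requires_grad]
    accum = [torch.zeros_like(p) for p in params]
    xs, ys = batch
    for x, y in zip(xs, ys):                          # per-example gradients
        with torch.no_grad():
            h = encoder(x.unsqueeze(0))               # frozen features
        loss = loss_fn(adapter(h), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)
        for a, g in zip(accum, grads):
            a.add_(g * scale)                         # clipped contribution
    n = len(xs)
    with torch.no_grad():
        for p, a in zip(params, accum):
            noise = torch.randn_like(a) * noise_multiplier * clip_norm
            p.add_(-(lr / n) * (a + noise))           # noisy average step
```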
We propose JFP, a Joint Future Prediction model that can learn to generate accurate and consistent multi-agent future trajectories. For this task, many different methods have been proposed to capture social interactions in the encoding part of the model; however, considerably less focus has been placed on representing interactions in the decoder and output stages. As a result, the predicted trajectories are not necessarily consistent with each other and often overlap unrealistically. In contrast, we propose an end-to-end trainable model that directly learns the interactions between pairs of agents in a structured, graphical-model formulation in order to generate consistent future trajectories. It sets new state-of-the-art results on the Waymo Open Motion Dataset (WOMD) for the interactive setting. We also investigate a more complex multi-agent setting on both WOMD and a larger internal dataset, where our approach improves significantly on trajectory-overlap metrics while obtaining on-par or better performance on single-agent trajectory metrics.
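To make the decoder-side interaction idea concrete, below is a small illustrative sketch (not the JFP implementation) that scores joint futures as per-agent scores plus a pairwise overlap penalty and picks the most compatible joint mode; a structured model would in practice use message passing rather than this exhaustive enumeration.

```python
import itertools
import numpy as np

def best_joint_modes(unary, trajs, overlap_penalty=10.0, radius=1.0):
    """unary: [A, K] per-agent mode scores; trajs: [A, K, T, 2] futures."""
    A, K = unary.shape
    best_score, best_modes = -np.inf, None
    for modes in itertools.product(range(K), repeat=A):   # joint assignments
        s = sum(unary[a, m] for a, m in enumerate(modes))
        for a, b in itertools.combinations(range(A), 2):  # pairwise term
            d = np.linalg.norm(trajs[a, modes[a]] - trajs[b, modes[b]],
                               axis=-1)
            s -= overlap_penalty * np.sum(d < radius)     # overlap cost
        if s > best_score:
            best_score, best_modes = s, modes
    return best_modes
```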
The proliferation of radical online communities and their violent offshoots has sparked great societal concern. However, the current practice of banning such communities from mainstream platforms has unintended consequences: (i) the further radicalization of their members on the fringe platforms to which they migrate; and (ii) the spillover of harmful content from fringe platforms back onto mainstream ones. Here, in a large observational study of two banned subreddits, r/The\_Donald and r/fatpeoplehate, we examine how factors associated with the RECRO radicalization framework relate to users' migration decisions. Specifically, we quantify how these factors affect users' decisions to post on fringe platforms and, for those who do, whether they continue posting on the mainstream platform. Our results show that individual-level factors, those relating to the behavior of users, are associated with the decision to post on the fringe platform, whereas social-level factors, users' connection with the radical community, only affect the propensity to be coactive on both platforms. Overall, our findings pave the way for evidence-based moderation policies, as the decisions to migrate and remain coactive amplify the unintended consequences of community bans.
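A hypothetical sketch of the kind of analysis described (ours, not the study's pipeline): a logistic regression relating individual- and social-level factors to the migration decision. All column names are invented for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical user-level table: 'activity' and 'toxicity' stand in for
# individual-level factors, 'community_embeddedness' for a social-level
# factor, and 'migrated' is a 0/1 indicator of posting on the fringe site.
df = pd.read_csv("users.csv")
fit = smf.logit("migrated ~ activity + toxicity + community_embeddedness",
                data=df).fit()
print(fit.summary())  # coefficients quantify each factor's association
```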
Small to medium-scale data science experiments often rely on research software developed ad hoc by individual scientists or small teams. Often there is no time to make the research software fast, reusable, and open access. The consequence is twofold. First, subsequent researchers must spend significant work hours building upon the proposed hypotheses or experimental framework; in the worst case, others cannot reproduce the experiment and reuse the findings for subsequent research. Second, if the ad-hoc research software fails during the often long-running, computationally expensive experiments, the overall effort to iteratively improve the software and rerun the experiments creates significant time pressure on the researchers. We suggest making caching an integral part of the research software development process, even before the first line of code is written. This article outlines caching recommendations for developing research software in data science projects. Our recommendations provide a perspective for circumventing common problems such as proprietary dependence and speed. At the same time, caching contributes to the reproducibility of experiments in the open science workflow. Concerning the four guiding principles of Findability, Accessibility, Interoperability, and Reusability (FAIR), we foresee that including the proposed recommendations in research software development will make the data related to that software FAIRer for both machines and humans. We demonstrate the usefulness of some of the proposed recommendations on our recently completed research software project in mathematical information retrieval.
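A minimal sketch of the kind of caching the article recommends, using `joblib.Memory` as one possible tool (our choice of library, not necessarily the authors'): memoizing an expensive experiment step to disk so that interrupted or repeated runs resume cheaply.

```python
from joblib import Memory

memory = Memory("cache_dir", verbose=0)   # results persist across runs

@memory.cache                             # recomputed only when inputs change
def expensive_step(n: int) -> float:
    # stand-in for a long-running, computationally expensive step
    return sum(i ** 0.5 for i in range(n))

print(expensive_step(10_000_000))  # slow on the first call
print(expensive_step(10_000_000))  # served from the on-disk cache
```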
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
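A small illustrative pipeline (a sketch assuming a recent MONAI version, not code from the paper): medical-image-aware transforms feeding a purpose-built network.

```python
from monai.networks.nets import UNet
from monai.transforms import (Compose, EnsureChannelFirst, LoadImage,
                              ScaleIntensity)

preprocess = Compose([
    LoadImage(image_only=True),  # reads NIfTI/DICOM with correct geometry
    EnsureChannelFirst(),
    ScaleIntensity(),
])

net = UNet(
    spatial_dims=3, in_channels=1, out_channels=2,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
)
image = preprocess("scan.nii.gz")   # hypothetical input volume
logits = net(image.unsqueeze(0))    # add a batch dimension
```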
Online platforms face pressure to keep their communities civil and respectful. The bans of problematic online communities from mainstream platforms like Reddit and Facebook are therefore often met with enthusiastic public reactions. However, this policy can lead users to migrate to alternative fringe platforms with lower moderation standards, where antisocial behaviors like trolling and harassment are widely accepted. As users of these communities often remain co-active across mainstream and fringe platforms, antisocial behaviors may spill over onto the mainstream platform. We study this possible spillover by analyzing around 70,000 users from three banned communities that migrated to fringe platforms: r/The\_Donald, r/GenderCritical, and r/Incels. Using a difference-in-differences design, we contrast co-active users with matched counterparts to estimate the causal effect of fringe-platform participation on users' antisocial behavior on Reddit. Our results show that participating in the fringe communities increases users' toxicity on Reddit (as measured by the Perspective API) and their involvement with subreddits similar to the banned communities, which often also breach platform norms. The effect intensifies with time and with exposure to the fringe platform. In short, we find evidence of a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
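For concreteness, a minimal sketch of a two-period difference-in-differences estimate of the kind described (illustrative only; the table layout and column names are assumptions): `treated` marks co-active users, `post` marks observations after their first fringe-platform activity.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")  # hypothetical user-period panel
did = smf.ols("toxicity ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["user_id"]}
)
print(did.params["treated:post"])  # the difference-in-differences estimate
```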
A fundamental issue regarding the use of ML models concerns the explanation of their predictions to increase the transparency of decision-making. Although several interpretability methods have emerged, some gaps regarding the reliability of their explanations have been identified. For instance, most methods are unstable (meaning that small variations in the data can produce very different explanations), and do not cope well with irrelevant features (i.e., features unrelated to the label). This article introduces two new interpretability methods, namely Varimp and Supclus, that overcome these issues by using weighted distances with local regression fits to take variable importance into account. Varimp generates explanations for each instance and can be applied to datasets with more complex relationships, while Supclus explains clusters of instances with similar explanations and can be applied to simpler datasets where such clusters can be found. We compared our methods against state-of-the-art approaches and show that they yield better explanations according to several metrics, particularly in high-dimensional problems with irrelevant features, as well as when the relationship between features and the target is non-linear.
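A minimal sketch of the core ingredient described, under our own assumptions (this is not the authors' code): a local linear surrogate whose sample weights come from a distance that is itself weighted by per-feature importance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def explain_instance(X, y, x0, importances, bandwidth=1.0):
    """Local surrogate around x0 using importance-weighted distances."""
    d = np.sqrt(((X - x0) ** 2 * importances).sum(axis=1))  # weighted dist
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))            # kernel weights
    local = LinearRegression().fit(X, y, sample_weight=w)
    return local.coef_  # local coefficients serve as the explanation
```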
Quantifying uncertainty is critical for the adoption of machine learning, especially for rejecting out-of-distribution (OOD) data back to human experts for review. Yet progress has been slow, as a balance must be struck between computational efficiency and the quality of uncertainty estimates. For this reason, many use deep ensembles of neural networks or Monte Carlo dropout to obtain reasonable uncertainty estimates at relatively minimal compute and memory. Surprisingly, when we focus on the real-world constraint of a false-positive rate (FPR) of $\leq 1\%$, prior methods fail to reliably detect OOD samples. Notably, even Gaussian random noise fails to trigger these popular OOD techniques. We help alleviate this problem by devising a simple adversarial training scheme that incorporates an attack on the epistemic uncertainty predicted by a dropout ensemble. We demonstrate that this method improves OOD detection performance on standard data (i.e., not adversarially crafted) and raises the standardized partial AUC from near-random guessing to $\geq 0.75$.
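To illustrate the epistemic-uncertainty signal involved, here is a minimal MC-dropout sketch (ours; the adversarial attack on the uncertainty itself is not reproduced). It assumes the model's only stochastic layers are dropout.

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout active at test time and average stochastic passes."""
    model.train()  # enables dropout (assumes no batch-norm side effects)
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1)
                             for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    epistemic = probs.var(dim=0).sum(dim=-1)  # disagreement across passes
    return mean, epistemic  # high epistemic variance can flag OOD inputs
```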
Given a full fingerprint image (rolled or slap), we present CycleGAN models to generate multiple latent impressions of the same identity as the full print. Our models can control the degree of distortion, noise, blurriness, and occlusion in the generated latent print images to obtain the good, bad, and ugly latent image categories introduced in the NIST SD27 latent database. The contributions of our work are twofold: (i) we demonstrate the similarity of synthetically generated latent fingerprint images to crime-scene latents in the NIST SD27 and MSP databases, as evaluated by the NIST NFIQ 2 quality measure and by ROC curves from a SOTA fingerprint matcher; and (ii) we use the synthetic latents to augment small latent training databases in the public domain to improve the performance of DeepPrint, a SOTA fingerprint matcher designed for rolled-to-rolled fingerprint matching, on three latent databases (NIST SD27, NIST SD302, and IIITD-SLF). For example, with the augmentation of synthetic latent data, the rank-1 retrieval performance of DeepPrint on the challenging NIST SD27 latent database improves from 15.50% to 29.07%. Our approach for generating synthetic latent fingerprints can be used to improve the recognition performance of any latent matcher and of its individual components (e.g., enhancement, segmentation, and feature extraction).
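As a sketch of the underlying CycleGAN objective (illustrative; the paper's controls for distortion, noise, blur, and occlusion are not reproduced here), with `G` mapping full prints to latent-style images and `F` mapping back:

```python
import torch.nn.functional as F_nn

def cycle_consistency_loss(G, F, full_print, latent, lam=10.0):
    """L1 cycle losses: full -> latent -> full and latent -> full -> latent."""
    rec_full = F(G(full_print))    # reconstruct the full print
    rec_latent = G(F(latent))      # reconstruct the latent impression
    return lam * (F_nn.l1_loss(rec_full, full_print) +
                  F_nn.l1_loss(rec_latent, latent))
```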
Answer set programs used in real-world applications often require that the program be usable with different sets of input data. This, however, can often lead to contradictory statements and consequently to an inconsistent program. A cause of potential contradictions in a program are conflicting rules. In this paper, we show how to ensure that a program $\mathcal{P}$ remains non-contradictory given any allowed set of input data. To this end, we introduce the notion of conflict-resolving $\lambda$-extensions. A conflict-resolving $\lambda$-extension for a conflicting rule $r$ is a set $\lambda$ of (default) literals such that extending the body of $r$ by $\lambda$ resolves all conflicts of $r$ at once. We investigate the properties that suitable $\lambda$-extensions should possess and, building on that, develop a strategy for computing all such conflict-resolving $\lambda$-extensions for each conflicting rule in $\mathcal{P}$. We show that a conflict-resolution process that successively resolves conflicts using $\lambda$-extensions eventually yields a program that remains non-contradictory given any allowed set of input data.
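A small worked example (our illustration, not taken from the paper): the rules $r_1: a \leftarrow b$ and $r_2: \neg a \leftarrow b, c$ conflict whenever the input data makes both $b$ and $c$ true. The set $\lambda = \{\mathit{not}\ c\}$ is a conflict-resolving $\lambda$-extension for $r_1$: extending its body yields $a \leftarrow b, \mathit{not}\ c$, so $r_1$ and $r_2$ can no longer fire together, and the program stays non-contradictory for every allowed input.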